“The opposite of what I wanted to achieve”: The lead author of the EU’s AI law reveals all


He is the main author of the EU's major AI law. He invested years of his life in this document. But now Gabriele Mazzini considers the AI Act a failure, so much so that he could no longer reconcile continuing to work on it with his conscience. He even resigned from his well-paid position at the European Commission. Speaking to the NZZ, he reveals for the first time how, in his view, a sensible law became a monster.
Mr. Mazzini, how did you become the lead author of the AI law?
My interest in technology regulation brought me to the European Commission in 2017. Before that, I had worked in development aid in Africa and at startups in New York. Both experiences showed me the impact of technology. At the Commission, I focused from the very beginning on the impact of AI on liability and on ethical issues. When Ursula von der Leyen decided in 2019 to regulate AI, I was tasked with drafting the text.
What did the work on the AI Act look like?
I read up on the subject, talked to experts, and conceived the structure of the legislation: the idea of regulating AI products, not AI itself, and classifying them into risk categories. I led a small team; the three of us drafted the text of the EU Commission's initial proposal.
The EU's AI law aims to standardize the rules for all products containing AI. There are no rules for low-risk applications. Unacceptable applications, such as some uses of biometric real-time surveillance, are banned across the EU. And AI systems used in high-risk areas such as schools, police work, or job applications must meet quality standards.
Following the release of ChatGPT, lawmakers added a section on such "general-purpose" AI models. Manufacturers must, for example, disclose which data was used for training and mitigate "systemic risks."
The Commission's proposal was published in April 2021. So you worked on the draft for almost three years?
The writing went relatively quickly. The major effort involved the research, coordination, and negotiations with various departments beforehand.
The AI Act has been gradually taking effect since this spring. How much does it still have in common with your initial draft?
The basic idea of regulating AI like a product, i.e., requiring safety standards based on risk, was my own. But today's law is much more complex, covers more, and prescribes more detail – often without good reason. This is already evident in its form: the original 85 articles, 89 recitals, and 9 annexes have become 113 articles, 180 recitals, and 13 annexes. On top of that come explanatory documents. The worst part, however, is that the new text is not as clear as it should be.
What happened in between?
The short answer: ChatGPT and enormous time pressure. But let me elaborate. In the EU, the Commission proposes legislation, which the Parliament and the Council of the European Union, made up of the relevant ministers of the member states, must then approve. Details are often changed; that requires discussion, and the process takes time. But after the release of ChatGPT, there was suddenly enormous pressure on the issue. Apocalyptic scenarios circulated. Do you remember the open letter from the Future of Life Institute, also signed by Elon Musk, which called for a six-month pause in AI development? That impressed many politicians and bureaucrats. Moreover, the terms of the Parliament and of von der Leyen's Commission were coming to an end, and there were fears of a completely new situation after the elections. Everyone was eager to pass a law quickly so they could say: We're taking care of this for you. We're ahead of the game and have it under control.
This panic about an imminent end of the world was obviously exaggerated from today's perspective.
Yes, the discourse has changed completely. About a year ago, the Draghi Report was published, highlighting how far the EU is lagging behind when it comes to technology companies. Now the concern is overregulation. When the AI Act was being discussed, that wasn't an issue. In 2023, von der Leyen spoke in her State of the Union address about the risk that AI could wipe out humanity. That's extraordinary. But back then, OpenAI's CEO was also calling for tough regulation in a blog post. The message was: we built this enormously dangerous thing, listen to us so we can tell you how to protect yourselves. That was the discourse at the time.
Do you accuse the politicians and officials involved of being poorly informed about what was at stake?
Yes. Everyone involved should have taken the time to read up on it themselves, reflect, gather opposing views, and thus set their own priorities. It was already clear to me in 2023 that we were on the wrong path. Solid laws don't emerge under the influence of alarmism and time pressure.
How did you feel about it?
I felt alone. In the fall of 2023, I told my superiors that I disagreed with what was happening. But that didn't change anything. Of course, I had no power to stop anything; I was too low in the hierarchy and wasn't present at the important meetings. But I had hoped that my expert opinion would count; after all, I had worked more on this dossier than anyone else. And, more importantly, this was one of the most consequential EU laws ever. Everyone should have understood that.
In December 2023, the three institutions publicly announced that they had reached an agreement.
This was presented as a major victory. However, the approval of the EU member states was still pending. In February 2024, the AI Act cleared this hurdle as well. And I decided: either I would be given a chance to change things, or I would resign. Because I didn't want to spend my life implementing something I didn't believe in – and that I had no power to influence.
Your superiors obviously did not give in.
I received no response. They accepted the risk of losing the person who had worked the most on this document.
If you had stayed, would you have been promoted?
If I hadn't spoken out, I probably would have. After completing such a major project, you usually get promoted. But what good would that have done me? I would have given up my integrity. I've always enjoyed my job. There's a contagious belief in the Commission that you're working for a good cause. When you yourself feel like things are going in the wrong direction, it's hard to bear. Coincidentally, I left on the very day the AI Act came into force, August 1, 2024. I'm very relieved to be able to express my personal opinion as a citizen today.
Explain your problem with today's version of the regulation.
We started with the question: How do you promote a new technology while minimizing its risks? We wanted to build trust through quality standards. Anyone who develops an AI product in a risky area must guarantee these standards. That's fine. But since the technology and its use cases are still evolving, we should regulate only what's necessary and extend the law to further use cases later. Moreover, many other regulations already apply to AI, data protection for example. The new AI obligations should be well aligned with these rules. That's not the case. I was also against regulating large language models so quickly. More time would have been needed there.
Some say it was a good move to act so quickly. This could lead to the Brussels effect, where regulation becomes a model worldwide – as happened with data protection.
I don't see this effect. Other countries are interested in EU regulation but don't treat it as a model. You also have to keep in mind: the first data protection directive was drafted in 1992. Then years passed in which the concepts matured and a culture developed. In 2012, the General Data Protection Regulation (GDPR) was proposed. And then another four years passed before it was adopted. With the AI Act, the entire process took only three years.
Just ten years ago, finalizing the GDPR was simply left to the next legislative period. Was the EU Commission less afraid of populist successors back then?
I don't know what the reason is. But the old laws are clearly better, more consistent. The other new EU digital laws are also not of high quality. These days, everything seems to have to be rushed through in a hurry.
Back then, Google and Facebook were also less powerful. Does lobbying by the tech industry have a negative impact?
Personally, I never felt the pressure of the lobbying. My door was always open to anyone who wanted to get involved. And indeed, the final text of the AI Act is heavily focused on restricting technology; it's not particularly business-friendly. In that sense, the lobbying wasn't particularly successful.
There was this moment when France, Germany, and Italy criticized the regulation of general-purpose AI models, mainly because the AI hopefuls Mistral and Aleph Alpha had protested.
Unfortunately, this was interpreted at the time as an attempt by these countries to protect their industries. As a bureaucrat, I was neutral in this debate; I was simply concerned with good, evidence-based legislation. But I thought the impulse to slow things down and coordinate more internationally was right. In the general rush, however, Parliament prevailed, heavily influenced by civil society and consumer advocates. The Commission should have intervened as the voice of reason.
What do you mean?
The Commission is not there to make decisions. That must be done by the Parliament and the Council of the EU. However, the Commission has the most resources and knowledge in the EU. It should use this expertise to position itself based on the facts and evidence – so that good laws are created. If it doesn't do this, the logic of this system collapses. Then there's no need for this entire bureaucratic apparatus.
What would you do differently if you could start over?
I wouldn't take on something so important again with so little power. As for the law, I have my doubts today as to whether AI technology should even be regulated as such. It would have been better to close specific legal loopholes instead of betting on one big, comprehensive law.
And what opportunities do you see for making changes at this point? The AI Act is gradually coming into effect.
I would stop it and reverse what's already in place, or change it substantially. Because as it stands we have legal uncertainty, which is why many are pushing for a pause in implementation. The text leaves a lot of room for interpretation. This creates a huge gray area in which large technology companies, in particular, will try to impose their own reading. The Commission is now producing hundreds of additional pages of guidelines, codes of conduct, templates, and so on. But that's not a solution. If anything, these documents create even more confusion, because they are not legally binding and offer companies no assurance that they won't be sued.
What does this situation mean for AI companies in the EU?
It's sad. I'm a fan of regulation, but not of all regulation. Laws should be the rules of the game, within which the best in the market prevail. But vague laws like this achieve the opposite. Companies with great technology could be slowed down because they're overwhelmed by regulation and fear negative consequences. Some companies will stay away from AI altogether. Others will now hire lawyers, ensure they meet all the requirements on paper, and might even be able to gain an advantage from it. It's the opposite of the level playing field for everyone that I wanted to achieve.